A while back, I read an interesting article about ITER's cryolines, which got me thinking about how computers simulate temperature. I've used Fusion 360's 3D thermal simulation tools and wondered if I could create something similar.
Modelling Temperature: Fourier's Law of Thermal Conduction
Fourier's law reveals itself in a terse equation:
$$\vec{q} = -k \nabla T $$ $$\begin{aligned}\vec{q} &\textnormal{ \enspace Local heat flux density} & \textnormal{W m}^{-2}\\ k & \textnormal{ \enspace Material's conductivity}&\textnormal{W m}^{-1}\,\textnormal{K}^{-1} \\ \nabla T & \textnormal{ \enspace Temperature gradient} & \textnormal{K m}^{-1}\end{aligned}$$ Fourier's law states that the rate of heat transfer through a material is proportional to the negative gradient of the temperature and to the cross-sectional area.
It is a small step up from Newton's law of cooling and allows for reasonably accurate models of different materials. With its added generality, we can scale Fourier's law to apply to any number of elements. There is one caveat: Fourier's law only tells us how energy is transferred at a specific instant. We have to perform a summation to find a given element's overall energy at a given time.¹
More explicitly, an element's overall energy is found by integrating the rate of energy transfer:
$$\frac{dE}{dt} = \vec{q} A$$
The temperature of a given element is its energy divided by some sort of “heat capacity”. This heat capacity represents the energy required to raise the temperature of a given element by one degree Celsius.
$$T_n^{t} = E_n^{t} \div C_n$$
For generality, we introduce a new syntax for referring to elements. In this way, the temperature of the element at index \(n\) at time \(t\) is \(T_n^t\). In the cases where the property is constant with respect to time, the superscript can be omitted.
This is all plain sailing: temperature is a property of a single element. The implementation isn't quite so clean when dealing with energy transfer, which is a property shared between a pair of elements.
For the time being, let's consider the interactions between two cubic elements of side length \(s\).
Considering \(E_1\), we have:
$$\frac{dE}{dt} = A\vec{q} $$ $$\frac{E_1^{t + \delta t} - E_1^t}{\delta t} = s^2\vec{q} $$
The logic for \(E_0\) differs subtly. Since the vector \(\vec{q}\) points away from \(E_0\), we include a negative sign, producing the equations:
$$E_0^{t + \delta t} = E_0^t - s^2 \vec{q} \delta t$$ $$E_1^{t + \delta t} = E_1^t + s^2 \vec{q} \delta t$$
These equations pass both of our sanity checks:
- \(E_0^t\) is dimensionally equal to \(s^2 \vec{q} \delta t\).
- Energy is conserved, since adding the two update equations cancels the \(s^2 \vec{q} \delta t\) terms: $$E_0^t + E_1^t = E_0^{t + \delta t} + E_1^{t + \delta t}$$
Reasonably satisfied that our equations hold, we can move on to calculating \(\vec{q}\). Consider the situation where the first element is metal and the second is glass.
What is the thermal conductivity between the metal and the glass here? We can approximate it as the average of the two conductivities, such that:
$$\vec{q} = - \frac{k_1 + k_0}{2} \times \frac{T_1^t - T_0^t}{s}$$
Putting everything together, we have:
$$T_n^{t} = E_n^{t} \div C_n$$ $$E_0^{t + \delta t} = E_0^t - s^2 \vec{q} \delta t$$ $$E_1^{t + \delta t} = E_1^t + s^2 \vec{q} \delta t$$
With these equations, we can calculate the energies and temperatures of all elements.
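Before scaling up, it's worth sanity-checking these equations in code. Below is a minimal Go sketch of the two-element case, using the aluminium and glass constants and the step sizes (\(s = 0.01\), \(\delta t = 0.05\)) that appear later in the post; the starting temperatures are arbitrary:

```go
package main

import "fmt"

func main() {
    const (
        s  = 0.01  // element side length
        dt = 0.05  // timestep
        k0 = 205.0 // conductivity of element 0 (aluminium)
        k1 = 0.8   // conductivity of element 1 (glass)
        c0 = 2.422 // heat capacity of element 0
        c1 = 2.1   // heat capacity of element 1
    )
    // Energies chosen so the elements start at 350 K and 300 K.
    e0, e1 := 350.0*c0, 300.0*c1

    for step := 0; step < 10; step++ {
        t0, t1 := e0/c0, e1/c1
        q := -(k0 + k1) / 2 * (t1 - t0) / s // Fourier's law across the shared face
        e0 -= s * s * q * dt                // element 0 loses energy when T0 > T1
        e1 += s * s * q * dt                // element 1 gains exactly as much
        fmt.Printf("T0=%.2f K  T1=%.2f K  total energy=%.2f\n", t0, t1, e0+e1)
    }
}
```

The total energy printed on each line never changes, which is our conservation check playing out numerically.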
Implementation
With our calculation of \(\vec{q}\), we have all we need to implement this at scale. It is fairly straightforward to simulate planar temperature flow, but care needs to be taken to produce an efficient program.
Currently, our program stores the following values for each individual element: $$\begin{aligned}E & \textnormal{ \enspace Energy}\\ T & \textnormal{ \enspace Temperature} \\ C & \textnormal{ \enspace Heat capacity}\\ k & \textnormal{ \enspace Conductivity}\end{aligned}$$
Each of these has to be stored in the same numeric type. Picking the most frugal type, `float32`, requires 128 bits per simulation element. For a reasonable simulation size of 600 by 600 elements, our elements require \(600^2 \times 32 \times 4\) bits, or 5.76 megabytes.
This is far too large to fit within an L3 cache and must instead live in DRAM, so we can expect the limiting factor for our program to be memory access rather than raw computation.
Floating point arithmetic is also somewhat expensive, so care should be taken to minimize the number of floating point operations (FLOPs) required per iteration.
The first thing to notice is that many of the properties of elements in our simulation are repeated. Thousands of elements will share the same heat capacity and thermal conductivity, so storing this information per element is wasteful. The solution is to encode the material that each element represents. Since there are only a small number of materials, the heat capacity and thermal conductivity of each element can then be accessed in a faster and more memory-efficient manner.
Is it more efficient to calculate or to store the temperature of each element? Multiplying by a heat capacity takes around 10 clock cycles, but the real cost is the 35-cycle delay in fetching the energy of an element.² But the energy of each element has already been fetched when simulating thermal conduction, so in four out of five cases it takes only 10 cycles to determine the temperature. This is clearly faster than looking up a stored temperature at a flat rate of 35 cycles per access. Hence, the temperature of each element does not need to be stored.
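That said, cycle counting is easy to get wrong, so claims like this are best verified empirically. Here is a hypothetical micro-benchmark using Go's `testing` package; the array sizes match the simulation, but the variable names and setup are mine, not the project's:

```go
package sim

import "testing"

// Hypothetical comparison: a stored temperature means touching a second
// array, whereas computing it reuses the energy the update needs anyway.
var (
    energies     [600 * 600]float32
    temperatures [600 * 600]float32
    recipC       = float32(1 / 2.422) // reciprocal heat capacity (aluminium)
    sink         float32              // stops the compiler eliding the loops
)

func BenchmarkStoredTemperature(b *testing.B) {
    for i := 0; i < b.N; i++ {
        var sum, e float32
        for j := range temperatures {
            sum += temperatures[j] // extra array traffic...
            e += energies[j]       // ...on top of the energy we need regardless
        }
        sink = sum + e
    }
}

func BenchmarkComputedTemperature(b *testing.B) {
    for i := 0; i < b.N; i++ {
        var sum float32
        for j := range energies {
            sum += energies[j] * recipC
        }
        sink = sum
    }
}
```

Running `go test -bench .` over something like this shows whether the multiply really is cheaper than the extra memory traffic on your own hardware.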
Putting all of this together, our program stores two arrays, one indexed by element position, the other by material type:
$$\underset{\textbf{large domain, slow lookup}}{\textnormal{Function of Position}} \begin{cases}E & \textnormal{Energy}\\ M & \textnormal{Material}\end{cases} $$ $$\underset{\textbf{small domain, fast lookup}}{\textnormal{Function of Material}} \begin{cases} C & \textnormal{Heat capacity}\\ k & \textnormal{Conductivity}\end{cases} $$
Writing code for the `material` type is trivial thanks to Go's `iota` constant:
```go
type material byte

const (
    Aluminium material = iota
    Glass
    Diamond
    MaxMaterials
)

var (
    heatCapacity = [MaxMaterials]float32{}
    conductivity = [MaxMaterials]float32{}
)
```
```go
func initMaterials() {
    // Isobaric (volumetric) heat capacities
    heatCapacity[Aluminium] = 2.422 // J / cm^3 K
    heatCapacity[Glass] = 2.1       // J / cm^3 K
    heatCapacity[Diamond] = 1.782   // J / cm^3 K

    // Thermal conductivities
    conductivity[Aluminium] = 205.0 // W / m K
    conductivity[Glass] = 0.8       // W / m K
    conductivity[Diamond] = 1000    // W / m K
}
```
We need another array holding the actual simulation elements, each with a corresponding material and energy.
```go
type Element struct {
    material material
    energy   float32
}

func (E Element) Temperature() float32 {
    return E.energy / heatCapacity[E.material]
}

var elementArr = [heatMapH][heatMapW]Element{}
```
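As a quick illustration (assuming `initMaterials` has already run), an element can be seeded at a target temperature by multiplying by its heat capacity, since `Temperature()` simply divides it back out:

```go
// Seed a glass element at 293 K: energy = temperature × heat capacity.
e := Element{material: Glass, energy: 293 * heatCapacity[Glass]}
fmt.Println(e.Temperature()) // prints 293
```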
There are still plenty of optimizations to be made. Floating point multiplication is far cheaper than division, so we can store the reciprocal of the heat capacity and use that for calculations.
```go
func (E Element) Temperature() float32 {
    return E.energy * recip_heatCapacity[E.material]
}

var (
    elementArr         = [heatMapH][heatMapW]Element{}
    heatCapacity       = [MaxMaterials]float32{}
    recip_heatCapacity = [MaxMaterials]float32{}
    conductivity       = [MaxMaterials]float32{}
)
```
```go
func initMaterials() {
    // Isobaric (volumetric) heat capacities
    heatCapacity[Aluminium] = 2.422 // J / cm^3 K
    heatCapacity[Glass] = 2.1       // J / cm^3 K
    heatCapacity[Diamond] = 1.782   // J / cm^3 K
    for i := range heatCapacity {
        recip_heatCapacity[i] = 1 / heatCapacity[i]
    }

    // Thermal conductivities
    conductivity[Aluminium] = 205.0 // W / m K
    conductivity[Glass] = 0.8       // W / m K
    conductivity[Diamond] = 1000    // W / m K
}
```
We can calculate constants ahead of time and encode them as a function of material:
$$E_0^{t + \delta t} = E_0^t + \overbrace{\frac{s(k_1 + k_0)}{2}}^{\textnormal{constants}} \times (T_1^t - T_0^t) \times \delta t$$
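This follows from substituting our expression for \(\vec{q}\) into the update for \(E_0\); one factor of \(s\) cancels against the denominator of the temperature gradient:

$$E_0^{t + \delta t} = E_0^t - s^2 \left( -\frac{k_1 + k_0}{2} \cdot \frac{T_1^t - T_0^t}{s} \right) \delta t = E_0^t + \frac{s(k_1 + k_0)}{2} (T_1^t - T_0^t) \, \delta t$$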
Encoding a constant for every pair of materials takes a little more ingenuity. We can represent each material using a bitwise flag like this:
$$\begin{aligned}\textnormal{Aluminium - }\tt{0001} \\\textnormal{Glass - }\tt{0010} \\\textnormal{Diamond - }\tt{0100} \\ \end{aligned}$$
These values for `material` can still be used as indices to look up values in an array. In this way, `heatCapacity` will still work as normal, but we can add new functionality for `conductivity`.
We can use the bitwise or operator to look up the value of our constant given two materials:
$$\textnormal{Diamond or Glass }\tt{= 0110}$$ $$\tt{conductivity[0110] = }\mathrm{\frac{s(k_\textnormal{Diamond} + k_\textnormal{Glass})}{2}}$$
This system is slightly wasteful in memory, but works admirably. A bitwise or is a trivial operation, and the lookup cuts out three FLOPs in the biggest bottleneck of the program.
```go
type material byte

const (
    Aluminium material = 1 << iota
    Glass
    Diamond
    MaxMaterials
)
```
```go
var (
    elementArr         = [heatMapH][heatMapW]Element{}
    heatCapacity       = [MaxMaterials]float32{}
    recip_heatCapacity = [MaxMaterials]float32{}
    conductivity       = [MaxMaterials]float32{}
)

func initMaterials() {
    // Isobaric (volumetric) heat capacities
    heatCapacity[Aluminium] = 2.422 // J / cm^3 K
    heatCapacity[Glass] = 2.1       // J / cm^3 K
    heatCapacity[Diamond] = 1.782   // J / cm^3 K
    for i := range heatCapacity {
        recip_heatCapacity[i] = 1 / heatCapacity[i]
    }

    // Thermal conductivities
    conductivity[Aluminium] = 205.0 // W / m K
    conductivity[Glass] = 0.8       // W / m K
    conductivity[Diamond] = 1000    // W / m K

    // Iterate through all possible pairs of materials.
    for i := material(1); i < MaxMaterials; i <<= 1 {
        for j := i << 1; j < MaxMaterials; j <<= 1 {
            // The conductivity between materials i and j is
            // the average of the conductivities of i and j.
            conductivity[i|j] = conductivity[i] + conductivity[j]
            conductivity[i|j] /= 2
        }
    }
    for i := range conductivity {
        // Premultiply by the side length of a cubic element.
        conductivity[i] *= 0.01
    }
}
```
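As a usage example (again assuming `initMaterials` has run), the pairwise constant for two neighbouring elements is now a single array access:

```go
// Diamond|Glass == 0b0110, so this fetches s * (k_Diamond + k_Glass) / 2
// with the side length already folded in.
k := conductivity[Diamond|Glass]
```

Conveniently, a material or'd with itself is just itself (`Glass|Glass == Glass`), so pairs of identical materials fall out of the same lookup with no special-casing.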
To implement the rest of our program, we need to define its use cases. How does the user input data? What scenarios is the program targeted at? How are results presented?
In a commercial setting, accurately determining the steady state temperature is key, but other (more sophisticated) programs can already do that.
Thus, my goal was to create a sandbox where simple ideas can be rapidly prototyped with instant feedback.
I needed keyboard and mouse input from users and a window to display the test piece in real time, so I programmed the rest of the project using the `sdl` library.
The crux of the implementation lies in its event-driven architecture. User input is parsed by the main thread, converted into a `HeatMap_Event`, and sent through `HeatMap_Event_chan`. The `runHeatMap` function reads events from `HeatMap_Event_chan` and executes each one. This arrangement is a little convoluted, but it remedies the pesky race conditions that would be present if the main thread directly executed user input. Even so, the code for `runHeatMap` is remarkably terse and comprehensible:
```go
func runHeatMap(func_wg *sync.WaitGroup) {
    defer func_wg.Done()
    var channel_empty bool
    var event HeatMap_Event
    for program_running {
        channel_empty = true
        for channel_empty {
            // Drain the event channel until it is empty.
            select {
            case event = <-HeatMap_Event_chan:
                // Execute an event from the channel.
                event.Draw_to_Arr()
            default:
                channel_empty = false
            }
        }
        switch program_view {
        // Depending on the selected view, either:
        case TEMPERATURE_VIEW:
            heatFlow()        // model heat flow
            showTemperature() // draw temperatures to screen
        case MATERIAL_VIEW:
            showMaterial() // draw materials to screen
        }
    }
}
```
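The event types themselves aren't shown in this post. As a rough sketch of how the plumbing might look, assuming `HeatMap_Event` is an interface and the channel is buffered (both assumptions on my part):

```go
// Hypothetical sketch of the event plumbing used above.
type HeatMap_Event interface {
    Draw_to_Arr() // apply this event to elementArr
}

var HeatMap_Event_chan = make(chan HeatMap_Event, 64)

// A possible concrete event: paint a material at a grid position.
type PaintEvent struct {
    x, y int
    mat  material
}

func (p PaintEvent) Draw_to_Arr() {
    elementArr[p.y][p.x].material = p.mat
}
```

The main thread can then send `HeatMap_Event_chan <- PaintEvent{x: 10, y: 20, mat: Glass}` without ever touching `elementArr` itself.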
The code for `heatFlow()` is also of interest. In summary, the function makes two sweeps through `elementArr`, once horizontally and once vertically. For each pair of adjacent elements, the heat flux between them is calculated and the energies of both elements are updated accordingly.
```go
func initMaterials() {
    ...
    for i := range conductivity {
        // Multiply by the time interval.
        conductivity[i] *= 0.05
    }
    ...
}

func heatFlow() {
    // Simulate heat flow horizontally.
    for i := 0; i < heatMapH; i++ {
        for j := 1; j < heatMapW; j++ {
            a, b := elementArr[i][j], elementArr[i][j-1]
            q := conductivity[a.material|b.material] *
                (a.Temperature() - b.Temperature())
            elementArr[i][j].energy = a.energy - q
            elementArr[i][j-1].energy = b.energy + q
        }
    }
    // Simulate heat flow vertically.
    for i := 1; i < heatMapH; i++ {
        for j := 0; j < heatMapW; j++ {
            a, b := elementArr[i][j], elementArr[i-1][j]
            q := conductivity[a.material|b.material] *
                (a.Temperature() - b.Temperature())
            elementArr[i][j].energy = a.energy - q
            elementArr[i-1][j].energy = b.energy + q
        }
    }
}
```
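One detail worth noting: both sweeps start at index 1, so no flux is ever computed across the outer edge of the grid. The boundary elements therefore behave as perfect insulators, which is fine for a sandbox but is one more simplification on top of those discussed in the concluding remarks.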
Explaining the rest of the code is beyond the scope of this post, so I’ll skip to the showcase of the final program:
Did the optimizations pay off?
By putting timers inside the `for program_running` loop, we can extract benchmarks for our code. The timers need nothing more exotic than the standard `time` package; here is a sketch of what the measurement might look like (the counters and once-per-1000-frames reporting are placeholders of mine, not taken from the project):
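```go
// Hypothetical in-loop frame timer.
var (
    frameCount int
    totalTime  time.Duration
)

// Inside the for program_running loop:
frameStart := time.Now()

heatFlow()
showTemperature()

totalTime += time.Since(frameStart)
frameCount++
if frameCount%1000 == 0 {
    avg := totalTime / time.Duration(frameCount)
    fmt.Printf("avg frame time: %v (%.0f FPS)\n",
        avg, float64(time.Second)/float64(avg))
}
```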
Of course, the frame rate decreases when the program simulates temperature flow:

| Resolution | `MATERIAL_VIEW` | `TEMPERATURE_VIEW` |
|---|---|---|
| 854x480 (480p SD) | 3 ms, 333 FPS | 9.5 ms, 105 FPS |
| 1920x1080 (1080p HD) | 14 ms, 71 FPS | 50 ms, 20 FPS |
These results are far better than I expected. Whilst my monitor certainly can't display anything at 333 FPS, I'm pleased that I've managed to squeeze this much performance out of a 600-line program.
The source code for this project is openly available on GitHub, so you can run the program on your own computer if you wish.
Concluding Remarks
Reflecting on this project, it's taken a lot of time to complete and I've really enjoyed it.
If there's one thing I've learnt, it's that simulating temperature is far more involved than "exponential decay" style maths questions would have you believe. There are a fair few shortcuts we've taken in our analysis too: operating in only two dimensions, using isotropic materials, and ignoring convection and radiation. Expanding on this with numerical methods to find the steady state temperature would also be useful.
1. The exact steady state temperature can actually be determined with calculus. 3Blue1Brown does an incredible job of explaining this in the case of one-dimensional temperature flow in a uniform material.
2. Typical float multiplications take 6 cycles. My Intel i7 processor takes 35 cycles to access data from the L3 cache.